37 research outputs found

    Using Computer Vision to analyze nonmanuals across signed languages


    Introduction


    Analyzing Literary Texts in Lithuanian Sign Language with Computer Vision: A Proof of Concept


    Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study

    We analyzed negative headshake found in the online corpus of Russian Sign Language. We found that negative headshake can co-occur with negative manual signs, although most of these signs are not accompanied by it. We applied OpenFace, a Computer Vision toolkit, to extract head rotation measurements from video recordings, and analyzed the headshake in terms of the number of peaks (turns), the amplitude of the turns, and their frequency. We found that such basic phonetic measurements of headshake can be extracted using a combination of manual annotation and Computer Vision, and can be further used in comparative research across constructions and sign languages.
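    The peak-based measurements the abstract describes can be sketched with a few lines of signal processing. This is a minimal sketch, assuming a head-yaw trace such as OpenFace's `pose_Ry` output sampled at a known frame rate; the signal below is synthetic and the function name is illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def headshake_stats(yaw, fps):
        """Count turns (peaks) in a head-yaw trace and estimate
        their mean amplitude and the shake frequency."""
        yaw = yaw - yaw.mean()                 # centre on the rest position
        # a turn is a local extremum, in either direction, above the noise level
        peaks, _ = find_peaks(np.abs(yaw), height=np.std(yaw))
        n_turns = len(peaks)
        amplitude = float(np.abs(yaw[peaks]).mean()) if n_turns else 0.0
        duration = len(yaw) / fps
        frequency = n_turns / (2 * duration)   # two turns per full shake cycle
        return n_turns, amplitude, frequency

    # synthetic 2 Hz headshake, one second at 30 fps
    t = np.linspace(0, 1, 30, endpoint=False)
    yaw = 0.3 * np.sin(2 * np.pi * 2 * t)
    turns, amp, freq = headshake_stats(yaw, fps=30)
    ```

    On real data the same per-turn numbers would feed the comparative statistics mentioned in the abstract.
    
    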

    Information structure: theoretical perspectives

    Under embargo until: 2022-09-12

    This chapter discusses the terminology commonly used in the information structure literature: in particular, topic, focus, contrast, and emphasis. An important component of our discussion is the impact of the visual-gestural modality on the syntactic and prosodic encoding of information structure. Kimmelman argued that in RSL and NGT, doubling is also used for information structure-related functions, but proposed that the functions of doubling are better described as foregrounding. Information structure is a field of linguistics covered in numerous books and articles. Information structure in sign languages has also been investigated almost from the first days of sign linguistics; however, as is often the case, most of the available studies focus on a very small number of sign languages, and among these, American Sign Language is the one most prominently represented. The chapter focuses on theoretical research, but also discusses the few available experimental or psycholinguistic studies on information structure in sign languages.

    K-RSL: a Corpus for Linguistic Understanding, Visual Evaluation, and Recognition of Sign Languages

    The paper presents the first dataset that aims to serve the interdisciplinary needs of the computer vision community and sign language linguistics. To date, a majority of Sign Language Recognition (SLR) approaches treat sign language recognition as a manual gesture recognition problem. However, signers also use other articulators: facial expressions, head and body position, and movement to convey linguistic information. Given the important role of non-manual markers, this paper proposes a dataset and presents a use case to stress the importance of including non-manual features to improve the recognition accuracy of signs. To the best of our knowledge, no prior publicly available dataset exists that explicitly focuses on the non-manual components responsible for the grammar of sign languages. To this end, the proposed dataset contains 28,250 high-resolution videos of signs, with annotation of manual and non-manual components. We conducted a series of evaluations to investigate whether non-manual components improve sign recognition accuracy. We release the dataset to encourage SLR researchers and to help advance current progress in this area toward real-time sign language interpretation. Our dataset will be made publicly available at https://krslproject.github.io/krsl-corpus
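    The paper's central claim, that adding non-manual features improves recognition, can be illustrated with a toy experiment. Everything below is synthetic and hypothetical, not the paper's model: two random feature blocks stand in for the manual and non-manual channels, and the label depends on both, as it would for, say, question marking.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 400
    manual = rng.normal(size=(n, 5))       # stand-in for hand-based features
    nonmanual = rng.normal(size=(n, 3))    # stand-in for brow/head-pose features
    # the class depends on a manual AND a non-manual cue, so the manual
    # channel alone underdetermines the label
    y = ((manual[:, 0] + 2 * nonmanual[:, 0]) > 0).astype(int)

    train, test = slice(0, 300), slice(300, None)
    acc_manual = (LogisticRegression()
                  .fit(manual[train], y[train])
                  .score(manual[test], y[test]))
    combined = np.hstack([manual, nonmanual])
    acc_combined = (LogisticRegression()
                    .fit(combined[train], y[train])
                    .score(combined[test], y[test]))
    ```

    The combined classifier recovers the (linearly separable) label almost perfectly, while the manual-only one cannot, mirroring the paper's use case in miniature.
    
    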

    Automatic Classification of Handshapes in Russian Sign Language

    Handshapes are one of the basic parameters of signs, and any phonological or phonetic analysis of a sign language must account for them. Many sign languages have been carefully analyzed by sign language linguists to create handshape inventories. This has theoretical implications, but also applied use, as an inventory is necessary for building sign language corpora that can be searched, filtered, and sorted by different sign components (such as handshape, orientation, location, and movement). However, creating an inventory is a very time-consuming process, so only a handful of sign languages have one. Therefore, in this work we first test an unsupervised approach aimed at automatically generating a handshape inventory. The process includes hand detection, cropping, and clustering techniques, which we apply to a commonly used resource, the Spreadthesign online dictionary (www.spreadthesign.com), in particular to Russian Sign Language (RSL). We then manually verify the data so that supervised learning can be applied to classify new data.
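    The unsupervised stage of such a pipeline, clustering features of cropped hand images into candidate handshape classes, can be sketched as follows. The embeddings here are synthetic stand-ins; in the actual pipeline they would come from a hand detector and feature extractor run on the dictionary videos, and the cluster count would itself need to be estimated.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # stand-in for embeddings of cropped hand images:
    # three underlying handshapes, 40 examples each
    prototypes = rng.normal(size=(3, 64))
    embeddings = np.vstack([p + 0.05 * rng.normal(size=(40, 64))
                            for p in prototypes])

    # cluster the embeddings to propose a candidate handshape inventory
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
    labels = km.labels_
    ```

    The resulting clusters would then be manually verified, as the abstract describes, before training a supervised classifier on the labeled crops.
    
    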

    Exploring Networks of Lexical Variation in Russian Sign Language

    When describing variation at the lexical level in sign languages, researchers often distinguish between phonological and lexical variants, using the following principle: if two signs differ in only one of the major phonological components (handshape, orientation, movement, location), then they are considered phonological variants; otherwise they are considered separate lexemes. We demonstrate that this principle leads to contradictions in some simple and more complex cases of variation. We argue that it is useful to visualize the relations between variants as graphs, and we describe possible networks of variants that can arise using this visualization tool. We further demonstrate that these scenarios in fact arise in the case of variation in color terms and kinship terms in Russian Sign Language (RSL), using a newly created database of lexical variation in RSL. We show that it is possible to develop a set of formal rules that can help distinguish phonological from lexical variation even in the problematic scenarios. However, we argue that it might be a mistake to dismiss the actual patterns of variant relations in order to arrive at the binary lexical vs. phonological variant opposition.
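    The "one differing component" principle, and the contradiction it creates, can be made concrete with a small graph sketch. The signs and parameter values below are invented for illustration: edges connect signs that differ in exactly one phonological component, and connected components then group variants, which can place two signs in one group even though they differ in two components.

    ```python
    from itertools import combinations

    # toy signs as (handshape, location, movement, orientation) tuples
    signs = {
        "RED-1": ("index", "chin",  "down", "in"),
        "RED-2": ("index", "chin",  "tap",  "in"),  # differs from RED-1 in movement
        "RED-3": ("flat",  "chin",  "tap",  "in"),  # differs from RED-2 in handshape
        "BROWN": ("flat",  "cheek", "circ", "out"),
    }

    def n_differences(a, b):
        return sum(x != y for x, y in zip(a, b))

    # phonological-variant edges: exactly one component differs
    edges = [(s, t) for s, t in combinations(signs, 2)
             if n_differences(signs[s], signs[t]) == 1]

    def components(nodes, edges):
        """Group nodes into connected components with union-find."""
        parent = {n: n for n in nodes}
        def find(n):
            while parent[n] != n:
                n = parent[n]
            return n
        for s, t in edges:
            parent[find(s)] = find(t)
        groups = {}
        for n in nodes:
            groups.setdefault(find(n), set()).add(n)
        return sorted(groups.values(), key=len, reverse=True)

    comps = components(signs, edges)
    ```

    Here RED-1 and RED-3 differ in two components, so the principle denies them variant status directly, yet chaining through RED-2 places them in a single component: exactly the kind of contradiction the graph visualization exposes.
    
    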

    Functional Data Analysis of Non-manual Marking of Questions in Kazakh-Russian Sign Language

    This paper is a continuation of Kuznetsova et al. (2021), which described non-manual markers of polar and wh-questions in comparison with statements in an NLP dataset of Kazakh-Russian Sign Language (KRSL) using Computer Vision. One of the limitations of the previous work was the distortion of the 3D face landmarks when the head was rotated. The proposed solution was to train a simple linear regression model to predict the distortion and then subtract it from the original output. We improve this technique with a multilayer perceptron. Another limitation that we intend to address in this paper is the discrete analysis of the continuous movement of non-manuals. In Kuznetsova et al. (2021) we averaged the value of the non-manual over its scope for statistical analysis. To preserve information on the shape of the movement, in this study we use a statistical tool that is often used in speech research, Functional Data Analysis, specifically Functional PCA.
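    Functional PCA over time-normalized non-manual trajectories can be approximated by ordinary PCA on curves resampled to a common grid, a standard discretized-FPCA shortcut (a dedicated FPCA library would add basis smoothing). The trajectories below are synthetic, with two built-in modes of variation; they are illustrative, not the paper's data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 50)

    # synthetic non-manual trajectories (e.g. eyebrow height over a question):
    # a shared mean curve plus two modes of variation per token
    mean_curve = np.sin(np.pi * t)
    curves = np.array([mean_curve
                       + rng.normal() * 0.5                       # mode 1: offset
                       + rng.normal() * 0.2 * np.cos(np.pi * t)   # mode 2: tilt
                       for _ in range(100)])

    fpca = PCA(n_components=2).fit(curves)   # discretized functional PCA
    scores = fpca.transform(curves)          # per-token scores for statistics
    explained = fpca.explained_variance_ratio_
    ```

    Unlike averaging a non-manual over its scope, the per-token component scores retain the shape of the movement and can enter the statistical comparison across question types.
    
    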